
    Unsupervised Image Regression for Heterogeneous Change Detection

    Change detection (CD) in heterogeneous multitemporal satellite images is an emerging and challenging topic in remote sensing. In particular, one of the main challenges is to tackle the problem in an unsupervised manner. In this paper, we propose an unsupervised framework for bitemporal heterogeneous CD based on the comparison of affinity matrices and image regression. First, our method quantifies the similarity of affinity matrices computed from colocated image patches in the two images. This is done to automatically identify pixels that are likely to be unchanged. With the identified pixels as pseudo-training data, we learn a transformation to map the first image to the domain of the other image and vice versa. Four regression methods are selected to carry out the transformation: Gaussian process regression, support vector regression, random forest regression (RFR), and a recently proposed kernel regression method called homogeneous pixel transformation. To evaluate the potential and limitations of our framework, as well as the benefits and disadvantages of each regression method, we perform experiments on two real data sets. The results indicate that the comparison of the affinity matrices can already be considered a CD method by itself. However, image regression is shown to improve the results obtained by the previous step alone and produces accurate CD maps despite the heterogeneity of the multitemporal input data. Notably, the RFR approach excels by achieving accuracy similar to the other methods, but with a significantly lower computational cost and with fast and robust tuning of hyperparameters.
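    As a rough illustration of the pipeline described above, the sketch below compares patch-wise Gaussian affinity matrices to select pseudo-training pixels and then fits only the RFR variant to map one image onto the other. The patch size, kernel width, selection threshold, and synthetic single-channel images are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def affinity(patch, sigma=0.1):
    """Gaussian affinity matrix between all pixel pairs of a flattened patch."""
    v = patch.reshape(-1, 1)
    return np.exp(-((v - v.T) ** 2) / (2 * sigma ** 2))

def affinity_distance(img_x, img_y, k=5):
    """Per-patch Frobenius distance between the two images' affinity matrices."""
    h, w = img_x.shape
    dist = np.zeros((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            px = img_x[i*k:(i+1)*k, j*k:(j+1)*k]
            py = img_y[i*k:(i+1)*k, j*k:(j+1)*k]
            dist[i, j] = np.linalg.norm(affinity(px) - affinity(py))
    return dist

rng = np.random.default_rng(0)
img_x = rng.random((100, 100))                      # stand-ins for the two heterogeneous images
img_y = 0.5 * img_x + 0.1 * rng.random((100, 100))

d = affinity_distance(img_x, img_y)
# Patches with low affinity distance are likely unchanged -> pseudo-training data.
unchanged = d < np.quantile(d, 0.5)
mask = np.kron(unchanged.astype(float), np.ones((5, 5))) > 0.5

# Regress image X onto the domain of image Y using only the "unchanged" pixels.
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(img_x[mask].reshape(-1, 1), img_y[mask])
img_x_hat = rfr.predict(img_x.reshape(-1, 1)).reshape(img_x.shape)

# The regression residual in Y's domain serves as the change indicator.
change_map = np.abs(img_x_hat - img_y)
```

    The same pseudo-training pixels could equally be fed to the other three regressors named in the abstract; only the fitted model changes.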

    Deep Image Translation With an Affinity-Based Change Prior for Unsupervised Multimodal Change Detection

    Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information of the change areas, which, however, is not always available. A main challenge in the unsupervised problem setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms. Experiments conducted on three real data sets show the effectiveness of our methodology.
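    A minimal sketch, under assumed settings, of how such an affinity-based change prior can be derived from the vertex degrees of an absolute affinity difference matrix and used to down-weight likely-changed pixels in a translation loss. The patch size, kernel width, and plain weighted MSE are illustrative stand-ins for the cycle-consistent and adversarial objectives used in the paper.

```python
import numpy as np

def gaussian_affinity(patch, sigma=0.1):
    v = patch.reshape(-1, 1)
    return np.exp(-((v - v.T) ** 2) / (2 * sigma ** 2))

def change_prior(img_x, img_y, k=5, sigma=0.1):
    """Per-pixel prior: vertex degree of the absolute affinity difference matrix."""
    h, w = img_x.shape
    prior = np.zeros_like(img_x)
    for i in range(0, h, k):
        for j in range(0, w, k):
            ax = gaussian_affinity(img_x[i:i+k, j:j+k], sigma)
            ay = gaussian_affinity(img_y[i:i+k, j:j+k], sigma)
            degree = np.abs(ax - ay).sum(axis=1)     # vertex degrees of |A_x - A_y|
            prior[i:i+k, j:j+k] = degree.reshape(k, k)
    return prior / prior.max()                       # normalise to [0, 1]

def weighted_translation_loss(translated_x, img_y, prior):
    """MSE weighted by (1 - prior): likely-changed pixels contribute less."""
    return np.mean((1.0 - prior) * (translated_x - img_y) ** 2)

rng = np.random.default_rng(1)
img_x = rng.random((50, 50))
img_y = 0.8 * img_x + 0.05 * rng.random((50, 50))

prior = change_prior(img_x, img_y)
translated_x = img_x                                 # stand-in for the network's X->Y translation
loss = weighted_translation_loss(translated_x, img_y, prior)
```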

    Unsupervised Change Detection in Heterogeneous Remote Sensing Imagery

    Change detection is a thriving and challenging topic in remote sensing for Earth observation. The goal is to identify changes that happen on the Earth by comparing two or more satellite or aerial images acquired at different times. Traditional methods rely on homogeneous data, that is, images acquired by the same sensor, under the same geometry, seasonal conditions, and recording configurations. However, the assumption of homogeneity does not hold true for many practical examples and applications, in particular when different sensors are involved. This represents a significant limitation, both in terms of response time to sudden events and in terms of temporal resolution when monitoring long-term trends. The alternative is to combine heterogeneous data, which on the one hand makes it possible to fully exploit the capabilities of all the available sensors, but on the other hand raises additional technical challenges. Indeed, heterogeneous sources imply different data domains, diverse statistical distributions, and inconsistent surface signatures across the various image acquisitions. This thesis explores techniques designed to cope with these issues, referred to as heterogeneous change detection methods. Specifically, the effort is dedicated to unsupervised learning, the branch of machine learning which does not rely on any prior knowledge about the data. This problem setting is as challenging as it is important, since the aim is to tackle the task as automatically as possible, without relying on any user interaction. The main novelty driving this study is that the comparison of affinity matrices can be used to define cross-domain similarities based on pixel relations rather than on the direct comparison of radiometry values. Starting from this fundamental idea, the research endeavours presented in this thesis result in the formulation of three methodologies that prove themselves reliable and perform favourably when compared to the state of the art. These methods leverage the affinity matrix comparison and incorporate both conventional machine learning techniques and more contemporary deep learning architectures to tackle the problem of unsupervised heterogeneous change detection.

    Toward Targeted Change Detection with Heterogeneous Remote Sensing Images for Forest Mortality Mapping

    Several generic methods have recently been developed for change detection in heterogeneous remote sensing data, such as images from synthetic aperture radar (SAR) and multispectral radiometers. However, these are not well suited to detecting weak signatures of certain disturbances of ecological systems. To resolve this problem, we propose a new approach based on image-to-image translation and one-class classification (OCC). We aim to map forest mortality caused by an outbreak of geometrid moths in a sparsely forested forest-tundra ecotone using multisource satellite images. The images preceding and following the event are collected by Landsat-5 and RADARSAT-2, respectively. Using a recent deep learning method for change-aware image translation, we compute difference images in both satellites' respective domains. These differences are stacked with the original pre- and post-event images and passed to an OCC trained on a small sample from the targeted change class. The classifier produces a credible map of the complex pattern of forest mortality.
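    A minimal sketch of the final stacking and one-class classification step, assuming the translated difference images are already computed. Scikit-learn's OneClassSVM stands in here for the one-class classifier, and all arrays, band counts, and sample sizes are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
h, w = 64, 64
pre_img   = rng.random((h, w, 6))   # e.g. optical bands before the event
post_img  = rng.random((h, w, 2))   # e.g. SAR channels after the event
diff_pre  = rng.random((h, w, 6))   # translation-based difference in the pre-event domain
diff_post = rng.random((h, w, 2))   # translation-based difference in the post-event domain

# Stack originals and difference images into one feature vector per pixel.
features = np.concatenate([pre_img, post_img, diff_pre, diff_post], axis=-1)
X = features.reshape(-1, features.shape[-1])

# Train the OCC on a small sample of pixels from the targeted change class
# (forest mortality); here 200 random pixels stand in for that sample.
target_idx = rng.choice(X.shape[0], size=200, replace=False)
occ = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X[target_idx])

# +1 = consistent with the target change class, -1 = outlier.
mortality_map = (occ.predict(X) == 1).reshape(h, w)
```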

    Clinically relevant features for predicting the severity of surgical site infections

    Surgical site infections are hospital-acquired infections resulting in severe risk for patients and significantly increased costs for healthcare providers. In this work, we show how to leverage irregularly sampled preoperative blood tests to predict, on the day of surgery, a future surgical site infection and its severity. Our dataset is extracted from the electronic health records of patients who underwent gastrointestinal surgery and developed either a deep infection, a shallow infection, or no infection. We represent the patients using the concentrations of fourteen common blood components collected over the four weeks preceding the surgery, partitioned into six time windows. A gradient-boosting-based classifier trained on our new set of features achieves AUROCs of 0.991 and 0.937 at predicting, respectively, a postoperative infection and its severity. Further analyses support the clinical relevance of our approach, as the most important features describe the nutritional status and the liver function over the two weeks prior to surgery.
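    A minimal sketch of the feature construction and classifier, assuming one record per blood test with patient identifier, blood component, days before surgery, and measured value. The per-window mean aggregation, the synthetic data, and the default gradient-boosting settings are illustrative assumptions rather than the authors' choices.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n_patients, n_components, n_windows = 200, 14, 6

# Synthetic, irregularly sampled tests over the 28 days before surgery.
rows = []
for pid in range(n_patients):
    for _ in range(rng.integers(10, 40)):            # variable number of tests per patient
        rows.append({
            "patient": pid,
            "component": rng.integers(0, n_components),
            "days_before": rng.integers(0, 28),
            "value": rng.normal(),
        })
tests = pd.DataFrame(rows)
tests["window"] = tests["days_before"] * n_windows // 28   # map each test to one of 6 windows

# One feature per (component, window): the mean concentration in that window.
features = (tests.pivot_table(index="patient",
                              columns=["component", "window"],
                              values="value", aggfunc="mean")
                 .reindex(range(n_patients)))
X = np.nan_to_num(features.to_numpy())               # missing (component, window) cells -> 0
y = rng.integers(0, 2, size=n_patients)              # 1 = infection (synthetic labels)

clf = GradientBoostingClassifier().fit(X, y)
infection_prob = clf.predict_proba(X)[:, 1]
```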

    Predicting Regions of Local Recurrence in Glioblastomas Using Voxel-Based Radiomic Features of Multiparametric Postoperative MRI

    The globally accepted surgical strategy in glioblastomas is to remove the enhancing tumor. However, the peritumoral region harbors infiltration areas responsible for future tumor recurrence. This study aimed to evaluate a predictive model that identifies areas of future recurrence using a voxel-based radiomics analysis of magnetic resonance imaging (MRI) data. This multi-institutional study included a retrospective analysis of patients diagnosed with glioblastoma who underwent surgery with complete resection of the enhancing tumor. Fifty-five patients met the selection criteria. The study sample was split into training (N = 40) and testing (N = 15) datasets. Follow-up MRI was used for ground truth definition, and postoperative structural multiparametric MRI was used to extract voxel-based radiomic features. Deformable coregistration was used to register the MRI sequences for each patient, followed by segmentation of the peritumoral region in the postoperative scan and the enhancing tumor in the follow-up scan. Peritumoral voxels overlapping with enhancing tumor voxels were labeled as recurrence, while non-overlapping voxels were labeled as nonrecurrence. Voxel-based radiomic features were extracted from the peritumoral region. Four machine learning-based classifiers were trained for recurrence prediction. A region-based evaluation approach was used for model assessment. The Categorical Boosting (CatBoost) classifier obtained the best performance on the testing dataset, with an average area under the curve (AUC) of 0.81 ± 0.09 and an accuracy of 0.84 ± 0.06 under region-based evaluation. There was a clear visual correspondence between predicted and actual recurrence regions. We have developed a method that accurately predicts the region of future tumor recurrence in MRI scans of glioblastoma patients. This could enable the adaptation of surgical and radiotherapy treatment to these areas to potentially prolong the survival of these patients.
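    A minimal sketch of the voxel labelling and classification steps, assuming coregistered masks and precomputed voxel-wise radiomic features are available and that the catboost package is installed. The synthetic arrays and model settings are illustrative only, not the study's configuration.

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(4)
shape = (32, 32, 16)
peritumoral = rng.random(shape) > 0.7        # peritumoral region (postoperative MRI mask)
recurrence  = rng.random(shape) > 0.9        # enhancing tumor (follow-up MRI mask)

# Label peritumoral voxels: 1 if they overlap future enhancing tumor, else 0.
voxel_idx = np.argwhere(peritumoral)
labels = recurrence[peritumoral].astype(int)

# Stand-in for the voxel-based radiomic features extracted from the
# multiparametric postoperative MRI (one row per peritumoral voxel).
n_features = 20
X = rng.normal(size=(len(voxel_idx), n_features))

clf = CatBoostClassifier(iterations=200, depth=4, verbose=0)
clf.fit(X, labels)

# Map the predicted recurrence probability back into the image grid.
prob_map = np.zeros(shape)
prob_map[tuple(voxel_idx.T)] = clf.predict_proba(X)[:, 1]
```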